
LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
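The per-query rediscovery described above can be sketched in a few lines. This is a toy: retrieval here is plain word-overlap scoring rather than a real embedding search, and the chunks are made up, but it shows the shape of the pattern — every question starts again from raw chunks, and nothing is carried over between calls.

```python
# Minimal sketch of the RAG-per-query pattern: retrieve top-k chunks
# for each question, then hand them to the LLM as context. Retrieval
# is toy keyword overlap, not a real embedding search.

def retrieve(question, chunks, k=2):
    """Score each chunk by word overlap with the question; return top k."""
    q_words = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q_words & set(c.lower().split())))
    return scored[:k]

chunks = [
    "The cache is invalidated on every deploy.",
    "Deploys run nightly at 02:00 UTC.",
    "The billing service reads from the cache.",
]

# Every call re-derives context from scratch -- nothing accumulates.
context = retrieve("when is the cache invalidated?", chunks)
```

The wiki pattern replaces that stateless loop with an artifact the LLM maintains and re-reads, so synthesis done once is not redone on every question.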

stylianipantela / a4-design.md
Last active April 20, 2026 20:36

Code reading questions

### 1. What happens (in broad terms) if sys_remove is called on a file that is currently open by another running process? Will a read on the file by the second process succeed? A write? Why or why not? (1 point)

sys_remove removes the file's directory entry from the file system. Processes that already hold a reference to the file retain that reference and can continue to call read/write on the file successfully. Processes trying to open the file after sys_remove will not be allowed to do so. The file is purged from the filesystem only after all open file descriptors for that file have been closed.
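The same semantics are observable from userspace on a POSIX system (where `unlink` plays the role of sys_remove): removing the name does not invalidate existing open references, and the data is reclaimed only after the last close. A minimal sketch, assuming a POSIX filesystem:

```python
# Demonstrates unlink semantics analogous to sys_remove: removing a
# file's directory entry does not invalidate already-open references;
# the inode is freed only after the last close.
import os, tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)
with open(path, "w") as f:
    f.write("still here")

reader = open(path, "r")    # a second open reference, like another process
os.unlink(path)             # analogous to sys_remove: the name is gone

name_gone = not os.path.exists(path)  # new opens by name now fail
data = reader.read()                  # existing reference still reads fine
reader.close()                        # storage reclaimed after the last close
```

(On Windows the unlink would fail while the file is open, which is exactly the behavioral difference the question is probing.)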

### 2. Describe the control flow, starting in the system call layer and proceeding through the VFS layer to reach SFS, that occurs for each of the following system calls. You need only trace the names of the functions that are called. Feel free to skip secondary or minor code paths that don't lead into SFS. (1 point)

### 3. Now similarly describe the control flow within SFS for each of the same operations.

n1snt / Oh my ZSH with zsh-autosuggestions zsh-syntax-highlighting zsh-fast-syntax-highlighting and zsh-autocomplete.md
Last active April 20, 2026 20:36
NGNL0221 / lmspeed-verify.html
Last active April 20, 2026 20:37
LM Speed verification page
<a href="https://lmspeed.net/provider/gcli-ggchan-dev" target="_blank"><img src="https://lmspeed.net/api/provider/claim-badge/429?claim=429-tbLSu_7UruAeNfp0iwCUVydQq09WBPJJ" alt="Verified on LM Speed" /></a>
patyearone / claude-code-statusline-guide.md
Last active April 20, 2026 20:32
Claude Code Status Line: Usage Limits, Pacing Targets, and Context Window - Complete guide with all the gotchas

Claude Code Status Line: Usage Limits, Pacing Targets, and Context Window

A complete guide to building a Claude Code status line that shows your 5-hour and weekly usage limits with color-coded pacing, reset times, and context window usage — all from data Claude Code already provides via stdin.

What you get:

your-project │ ctx 0% │ 5hr (3pm) 16/65% │ wk (fri,12pm) 44/57% │ Opus 4.6 (1M context)
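A status-line command receives a JSON description of the session on stdin and prints one line back. As a minimal sketch of the leftmost segments above — note that the field names used here (`workspace.current_dir`, `model.display_name`) are assumptions about the stdin payload, so check them against your Claude Code version's documentation:

```python
# Minimal status-line renderer: takes the session JSON Claude Code
# pipes to the status-line command and builds "project | model".
# Field names are assumptions -- verify against your version's docs.
import json, os

def render(payload: dict) -> str:
    """Build the left part of a status line from the session JSON."""
    project = os.path.basename(payload.get("workspace", {}).get("current_dir", "?"))
    model = payload.get("model", {}).get("display_name", "?")
    return f"{project} | {model}"

# Wired up as a real status-line command, this would read stdin:
#   print(render(json.load(sys.stdin)))
line = render({"workspace": {"current_dir": "/home/me/your-project"},
               "model": {"display_name": "Opus 4.6"}})
```

The usage-limit and pacing segments in the full guide extend the same pattern: parse more fields from the payload, compute percentages, and append color-coded segments to the string.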
jph00 / solveit-ref.md
Last active April 20, 2026 20:30
Solveit reference

Solveit Reference

What is Solveit?

Solveit is a "Dialog Engineering" web application for interactive development. Unlike ChatGPT (pure chat) or Jupyter (pure code), Solveit combines three message types in one workspace: code execution, markdown notes, and AI prompts. Users build solutions incrementally—writing a few lines, understanding them, then continuing—rather than generating large code blocks.

The AI sees the full dialog context (code, outputs, notes, prompts) when responding -- but only those ABOVE the current message. Users can edit any message at any time, including AI responses—the dialog is a living document, not an append-only log.

The dialog is a running ipykernel instance. A "dialog" is like a "Jupyter notebook", and uses a compatible ipynb file format, but provides a superset of functionality (in particular, "prompt messages"). A "message" is like a "Jupyter cell", with additional attributes stored as ipynb cell metadata. Most standard Jupyter functionality is supported (including cell
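Because a dialog is ipynb-compatible, it can be inspected with nothing but the standard library: cells and `cell_type` come from the standard nbformat layout, with Solveit's extra attributes living in per-cell metadata. A small sketch — the metadata key `msg_type` and its values are hypothetical stand-ins, since the source does not name the actual attributes Solveit stores:

```python
# Inspect a Solveit dialog as a plain .ipynb document, reading the
# standard nbformat fields plus a (hypothetical) per-cell metadata key.
import json

def message_types(ipynb_text: str) -> list:
    """Return one type label per cell, preferring Solveit metadata."""
    nb = json.loads(ipynb_text)
    return [c.get("metadata", {}).get("msg_type", c["cell_type"])
            for c in nb.get("cells", [])]

dialog = json.dumps({"cells": [
    {"cell_type": "code", "metadata": {}, "source": ["1 + 1"]},
    {"cell_type": "markdown", "metadata": {"msg_type": "prompt"},
     "source": ["Explain the result"]},
]})
types = message_types(dialog)
```

Cells without Solveit metadata degrade gracefully to their plain Jupyter type, which is what makes the format a superset rather than a fork.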